Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides us with an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally-generated abstract shapes. Finally, we integrate our CNNs in a minimal modeling interface that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance. The main strengths of our approach are its robustness to freehand bitmap drawings, its ability to adapt to different object categories, and the continuum it offers between single-view and multi-view sketch-based modeling.
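To make the two-network pipeline concrete, the following is a minimal PyTorch sketch of the single-view prediction and iterative multi-view refinement described above. The layer sizes, the 32^3 grid resolution, the residual-style fusion, and all class names are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the two-CNN pipeline: a single-view network predicts voxel
# occupancy from one drawing, and an updater network refines that prediction
# given each additional drawing. All dimensions/names are assumptions.
import torch
import torch.nn as nn

class SingleViewNet(nn.Module):
    """Predicts per-voxel occupancy probabilities from one line drawing."""
    def __init__(self, grid=32):
        super().__init__()
        self.grid = grid
        self.encoder = nn.Sequential(                  # 1x256x256 sketch -> latent
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, grid ** 3),
        )

    def forward(self, sketch):
        logits = self.encoder(sketch)                  # (B, grid^3)
        return torch.sigmoid(logits).view(-1, 1, self.grid, self.grid, self.grid)

class UpdaterNet(nn.Module):
    """Refines an existing occupancy grid given a new drawing of the shape
    (assumed to be expressed in the new viewpoint's frame)."""
    def __init__(self, grid=32):
        super().__init__()
        self.sketch_net = SingleViewNet(grid)          # encode the new drawing
        self.fuse = nn.Sequential(                     # fuse new evidence + prior
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, sketch, prior_occupancy):
        fresh = self.sketch_net(sketch)
        stacked = torch.cat([fresh, prior_occupancy], dim=1)  # (B, 2, g, g, g)
        return torch.sigmoid(self.fuse(stacked))

# Iterative multi-view fusion: start from the single-view prediction, then
# apply the updater once per additional drawing, in any order, with no
# stroke correspondences required between drawings.
single_view = SingleViewNet()
updater = UpdaterNet()
sketches = [torch.rand(1, 1, 256, 256) for _ in range(3)]   # dummy drawings
occupancy = single_view(sketches[0])
for s in sketches[1:]:
    occupancy = updater(s, occupancy)                  # refine with each new view
print(occupancy.shape)  # torch.Size([1, 1, 32, 32, 32])
```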